
    Time-Efficient Read/Write Register in Crash-prone Asynchronous Message-Passing Systems

    The atomic register is certainly the most basic object of computing science. Its implementation on top of an n-process asynchronous message-passing system has received a lot of attention. It has been shown that t < n/2 (where t is the maximal number of processes that may crash) is a necessary and sufficient requirement to build an atomic register on top of a crash-prone asynchronous message-passing system. In such a context, this paper revisits the notion of a fast implementation of an atomic register and presents a new time-efficient asynchronous algorithm. Its time-efficiency is measured under two different underlying synchrony assumptions. Whatever the assumption, a write operation always costs a round-trip delay, while a read operation costs a single round-trip delay in favorable circumstances (intuitively, when it is not concurrent with a write). When designing this algorithm, the aim was to stay as close as possible to the spirit of the famous ABD algorithm (proposed by Attiya, Bar-Noy, and Dolev).
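
    For readers unfamiliar with the ABD pattern the paper builds on, the following C++ sketch mimics its quorum structure with direct calls on in-memory replica structs: a write first obtains the highest timestamp from a majority and then stores the value with a larger timestamp at a majority, while a read fetches the freshest (timestamp, value) pair from a majority and writes it back. This is only a reading aid under simplifying assumptions (no real message passing, no crashes, no concurrency); the names Replica, abd_write and abd_read are invented for the example, and the paper's fast read skips the write-back phase in favorable circumstances.

    // Single-process sketch of an ABD-style majority-quorum register
    // (illustrative only; real replicas are remote and contacted asynchronously).
    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Replica { uint64_t ts = 0; std::string val; };

    static std::vector<Replica> replicas(5);                 // n = 5 tolerates t = 2 crashes
    static const size_t MAJORITY = replicas.size() / 2 + 1;  // any two majorities intersect

    void abd_write(const std::string& v) {
        // Phase 1: learn the highest timestamp held by some majority.
        uint64_t max_ts = 0;
        for (size_t i = 0; i < MAJORITY; ++i) max_ts = std::max(max_ts, replicas[i].ts);
        // Phase 2: store the value with a strictly larger timestamp at a majority.
        for (size_t i = 0; i < MAJORITY; ++i) {
            replicas[i].ts = max_ts + 1;
            replicas[i].val = v;
        }
    }

    std::string abd_read() {
        // Phase 1: collect (timestamp, value) pairs from a majority, keep the freshest.
        Replica best;
        for (size_t i = 0; i < MAJORITY; ++i)
            if (replicas[i].ts >= best.ts) best = replicas[i];
        // Phase 2: write the freshest pair back to a majority so that no later
        // read can return an older value (the phase a "fast" read may skip when
        // the read is not concurrent with a write).
        for (size_t i = 0; i < MAJORITY; ++i)
            if (replicas[i].ts < best.ts) replicas[i] = best;
        return best.val;
    }

    int main() {
        abd_write("hello");
        std::cout << abd_read() << "\n";   // prints "hello"
    }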

    Self-stabilization Overhead: an Experimental Case Study on Coded Atomic Storage

    Shared memory emulation can be used as a fault-tolerant and highly available distributed storage solution or as a low-level synchronization primitive. Attiya, Bar-Noy, and Dolev were the first to propose a single-writer, multi-reader linearizable register emulation in which the register is replicated to all servers. Recently, Cadambe et al. proposed the Coded Atomic Storage (CAS) algorithm, which uses erasure coding to achieve data redundancy with a much lower communication cost than previous algorithmic solutions. Although CAS can tolerate server crashes, it was not designed to recover from unexpected transient faults without external (human) intervention. In this respect, Dolev, Petig, and Schiller have recently developed a self-stabilizing version of CAS, which we call CASSS. As one would expect, self-stabilization does not come as a free lunch: it mainly introduces communication overhead for detecting inconsistencies and stale information. One may therefore wonder whether the overhead introduced by self-stabilization nullifies the gain of erasure coding. To answer this question, we have implemented and experimentally evaluated the CASSS algorithm on PlanetLab, a planetary-scale distributed infrastructure. The evaluation shows that our implementation of CASSS scales very well with the number of servers, the number of concurrent clients, and the size of the replicated object. More importantly, it shows that (a) CASSS has only a constant overhead compared to the traditional CAS algorithm (which we also implemented), and that (b) the recovery period (after the last occurrence of a transient fault) takes only as long as a few client (read/write) operations. Our results suggest that CASSS provides automatic recovery from transient faults, with bounded resources, without significantly impacting efficiency.
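
    To make the communication-cost argument behind erasure coding concrete, here is a toy C++ sketch, not taken from CAS or CASSS: each server stores only a fragment of the object instead of a full replica, and a missing fragment can be rebuilt from the surviving ones. CAS itself uses general (n, k) MDS codes (e.g., Reed-Solomon) inside multi-phase quorum operations; the single XOR-parity code and the names encode/repair below are assumptions chosen only to keep the example self-contained.

    // Toy (k data + 1 parity) XOR code: tolerates the loss of any one fragment
    // and shows the per-server storage/communication saving versus replication.
    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    // Split `data` into k equal-size fragments (zero-padded) plus an XOR parity.
    std::vector<std::string> encode(const std::string& data, std::size_t k) {
        std::size_t frag = (data.size() + k - 1) / k;
        std::vector<std::string> out(k + 1, std::string(frag, '\0'));
        for (std::size_t i = 0; i < data.size(); ++i) out[i / frag][i % frag] = data[i];
        for (std::size_t i = 0; i < k; ++i)            // parity = XOR of the k data fragments
            for (std::size_t j = 0; j < frag; ++j) out[k][j] ^= out[i][j];
        return out;
    }

    // Rebuild one lost fragment by XOR-ing all the surviving ones (parity included).
    std::string repair(const std::vector<std::string>& frags, std::size_t lost) {
        std::string fixed(frags[0].size(), '\0');
        for (std::size_t i = 0; i < frags.size(); ++i)
            if (i != lost)
                for (std::size_t j = 0; j < fixed.size(); ++j) fixed[j] ^= frags[i][j];
        return fixed;
    }

    int main() {
        const std::string object = "coded atomic storage";       // 20 bytes
        auto frags = encode(object, 4);                           // 4 data + 1 parity fragment
        std::cout << "bytes per server: " << frags[0].size()      // 5 bytes vs 20 for a replica
                  << " (full copy: " << object.size() << ")\n";
        std::cout << "recovered fragment 2: \"" << repair(frags, 2) << "\"\n";  // "ic st"
    }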

    Compositional Verification of Compiler Optimisations on Relaxed Memory

    This paper is about verifying program transformations on an axiomatic relaxed memory model of the kind used in C/C++ and Java. Relaxed models present particular challenges for verifying program transformations, because they generate many additional modes of interaction between code and context. For a block of code being transformed, we define a denotation from its behaviour in a set of representative contexts. Our denotation summarises the interactions of the code block with the rest of the program, both through local and global variables and through subtle synchronisation effects due to relaxed memory. We can then prove that a transformation does not introduce new program behaviours by comparing the denotations of the code block before and after the transformation. Our approach is compositional: by examining only representative contexts, transformations are verified for any context. It is also fully abstract, meaning that any valid transformation can be verified. We cover several tricky aspects of C/C++-style memory models, including release-acquire operations, sequentially consistent fences, and non-atomics. We also define a variant of our denotation that is finite, at the cost of losing full abstraction. Based on this variant, we have implemented a prototype verification tool.
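
    A standard C/C++ release-acquire idiom (not code from the paper) illustrates the kind of context interaction such a denotation has to capture: if an optimiser moved the write to data below the release store to ready, the reader could observe ready == true with data still 0, a behaviour the original program does not have, so that reordering would be an invalid transformation.

    // Release/acquire message passing: the plain write to `data` is published
    // by the release store and visible to the acquiring reader.
    #include <atomic>
    #include <cassert>
    #include <thread>

    std::atomic<bool> ready{false};
    int data = 0;                    // non-atomic; ordered only via `ready`

    void writer() {
        data = 42;                                        // plain write
        ready.store(true, std::memory_order_release);     // publishes `data`
    }

    void reader() {
        while (!ready.load(std::memory_order_acquire)) {} // wait for publication
        assert(data == 42);          // guaranteed by release/acquire synchronisation
    }

    int main() {
        std::thread t1(writer), t2(reader);
        t1.join();
        t2.join();
    }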

    Conditions for Set Agreement with an Application to Synchronous Systems

    The $k$-set agreement problem is a generalization of the consensus problem: considering a system made up of $n$ processes where each process proposes a value, each non-faulty process has to decide a value such that a decided value is a proposed value and no more than $k$ different values are decided. While this problem cannot be solved in an asynchronous system prone to $t$ process crashes when $t \geq k$, it can always be solved in a synchronous system; $\lfloor \frac{t}{k} \rfloor + 1$ is then a lower bound on the number of rounds (consecutive communication steps) for the non-faulty processes to decide. The \emph{condition-based} approach was introduced in the consensus context. Its aim was both to circumvent the consensus impossibility in asynchronous systems and to allow for more efficient consensus algorithms in synchronous systems. This paper addresses the condition-based approach in the context of the $k$-set agreement problem. It has two main contributions. The first is the definition of a framework that allows defining conditions suited to the $\ell$-set agreement problem. More precisely, a condition is defined as a set of input vectors such that each of its input vectors can be seen as "encoding" $\ell$ values, namely, the values that can be decided from that vector. A condition is characterized by the parameters $t$, $\ell$, and a parameter denoted $d$ such that the greater $d+\ell$, the less constraining the condition (i.e., it includes more and more input vectors as $d+\ell$ increases, and there is a condition that includes all the input vectors when $d+\ell > t$). The conditions characterized by the triple of parameters $t$, $d$, and $\ell$ define the class of conditions denoted $\mathcal{S}_t^{d,\ell}$, $0 \leq d \leq t$, $1 \leq \ell \leq n-1$. The properties of the sets $\mathcal{S}_t^{d,\ell}$ are investigated, and it is shown that they have a lattice structure. The second contribution is a generic synchronous $k$-set agreement algorithm based on a condition $C \in \mathcal{S}_t^{d,\ell}$, i.e., a condition suited to the $\ell$-set agreement problem, for $\ell \leq k$. This algorithm requires at most $\lfloor \frac{d-1+\ell}{k} \rfloor + 1$ rounds when the input vector belongs to $C$, and $\lfloor \frac{t}{k} \rfloor + 1$ rounds otherwise. (Interestingly, this algorithm includes as particular cases the classical synchronous $k$-set agreement algorithm that requires $\lfloor \frac{t}{k} \rfloor + 1$ rounds (case $d=t$ and $\ell=1$), and the synchronous condition-based consensus algorithm that terminates in $d+1$ rounds when the input vector belongs to the condition and in $t+1$ rounds otherwise (case $k=\ell=1$).)
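
    As a quick sanity check of the two bounds (the parameter values below are arbitrary, chosen only for illustration), consider $t=6$, $k=2$ and a condition $C \in \mathcal{S}_t^{d,\ell}$ with $d=3$, $\ell=2$:

    \[
      \left\lfloor \frac{d-1+\ell}{k} \right\rfloor + 1
        = \left\lfloor \frac{3-1+2}{2} \right\rfloor + 1 = 3
        \quad\text{rounds when the input vector belongs to } C,
    \]
    \[
      \left\lfloor \frac{t}{k} \right\rfloor + 1
        = \left\lfloor \frac{6}{2} \right\rfloor + 1 = 4
        \quad\text{rounds otherwise.}
    \]

    Setting $d=t$ and $\ell=1$ recovers the classical $\lfloor t/k \rfloor + 1$ bound, and $k=\ell=1$ gives $d+1$ versus $t+1$ rounds, matching the special cases listed above.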

    Stateful Multi-Client Verifiable Computation

    This paper develops a cryptographic protocol for outsourcing arbitrary stateful computation among multiple clients to an untrusted server, while guaranteeing the integrity of the data. The clients communicate only with the server and store only a short authenticator to ensure that the server does not cheat. Our contribution is twofold. First, we extend the recent hash&prove scheme of Fiore et al. (CCS 2016) to stateful computations that support arbitrary updates by the untrusted server, in a way that can be verified by the clients. We use this scheme to generically instantiate authenticated data types. Second, we describe a protocol for multi-client verifiable computation based on an authenticated data type, and prove that it achieves a computational version of fork linearizability. This is the strongest guarantee that can be achieved in a setting where clients do not communicate directly; it ensures correctness and consistency of the outputs seen by the clients individually.
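
    To give a feel for what a short authenticator buys the clients, here is a toy C++ sketch, not the paper's protocol: each client folds every operation it observes into a small running digest, and two clients whose views have been forked by a cheating server end up with incompatible digests. Real constructions use a collision-resistant hash and the protocol's own consistency checks; std::hash and the names Authenticator/apply_update are stand-ins invented for the example.

    // Toy hash-chain authenticator: detects when a server has shown two clients
    // diverging (forked) operation histories, the situation fork linearizability
    // is about.  Not cryptographically secure (std::hash is not collision resistant).
    #include <cstdint>
    #include <functional>
    #include <iostream>
    #include <string>

    struct Authenticator {
        std::uint64_t digest = 0;    // the short state each client keeps

        // Fold the next observed operation into the digest.
        void apply_update(const std::string& op) {
            digest = std::hash<std::string>{}(std::to_string(digest) + "|" + op);
        }
    };

    int main() {
        Authenticator a, b;
        const std::string common[] = {"put(x,1)", "put(y,2)"};
        for (const auto& op : common) { a.apply_update(op); b.apply_update(op); }

        a.apply_update("put(y,3)");     // the server shows client a one continuation...
        b.apply_update("put(x,99)");    // ...and client b a different (forked) one

        // If the clients ever compare authenticators, the fork becomes visible.
        std::cout << (a.digest == b.digest ? "histories consistent"
                                           : "fork detected") << "\n";
    }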

    Economic-demographic interactions in long-run growth

    Cliometrics confirms that Malthus’ model of the pre-industrial economy, in which increases in productivity raise population but higher population drives down wages, is a good description of much of demographic and economic history. A contributor to the Malthusian equilibrium was the Western European Marriage Pattern, the late age of female first marriage, which promised to retard the fall of living standards by restricting fertility. The demographic transition and the transition from Malthusian economies to modern economic growth have attracted many Cliometric models, surveyed here. A popular model component is that lower levels of mortality over many centuries increased the returns to, or preference for, human capital investment, so that technical progress eventually accelerated. This initially boosted birth rates, and population growth accelerated. Fertility decline was earliest and most striking in late eighteenth-century France. By the 1830s the fall in French marital fertility is consistent with a response to the rising opportunity cost of children. The rest of Europe did not begin to follow until the end of the nineteenth century. Interactions between the economy and migration have been modelled with Cliometric structures closely related to those of natural increase and the economy. Wages were driven up by emigration from Europe and driven down in the economies receiving immigrants.

    Why High-Performance Modelling and Simulation for Big Data Applications Matters

    Modelling and Simulation (M&S) offer adequate abstractions to manage the complexity of analysing big data in scientific and engineering domains. Unfortunately, big data problems are often not easily amenable to efficient and effective use of High Performance Computing (HPC) facilities and technologies. Furthermore, M&S communities typically lack the detailed expertise required to exploit the full potential of HPC solutions, while HPC specialists may not be fully aware of specific modelling and simulation requirements and applications. The COST Action IC1406 High-Performance Modelling and Simulation for Big Data Applications has created a strategic framework to foster interaction between M&S experts from various application domains on the one hand and HPC experts on the other, in order to develop effective solutions for big data applications. One of the tangible outcomes of the COST Action is a collection of case studies from various computing domains. Each case study brought together HPC and M&S experts, bearing witness to the effective cross-pollination facilitated by the COST Action. In this introductory article we argue why joining forces between the M&S and HPC communities is both timely in the big data era and crucial for success in many application domains. Moreover, we provide an overview of the state of the art in the various research areas concerned.

    Current issues in medically assisted reproduction and genetics in Europe: research, clinical practice, ethics, legal issues and policy. European Society of Human Genetics and European Society of Human Reproduction and Embryology.

    In March 2005, a group of experts from the European Society of Human Genetics and the European Society of Human Reproduction and Embryology met to discuss the interface between genetics and assisted reproductive technology (ART), and published an extended background paper, recommendations and two editorials. Seven years later, in March 2012, a follow-up interdisciplinary workshop was held, involving representatives of both professional societies, including experts from the European Union Eurogentest2 Coordination Action Project. The main goal of this meeting was to discuss developments at the interface between clinical genetics and ARTs. As more genetic causes of reproductive failure are now recognised and an increasing number of patients undergo testing of their genome before conception, either in regular health care or in the context of direct-to-consumer testing, the need for genetic counselling and preimplantation genetic diagnosis (PGD) may increase. Preimplantation genetic screening (PGS) thus far lacks evidence from randomised clinical trials to substantiate that the technique is both effective and efficient. Whole-genome sequencing may create greater challenges in both the technological and the interpretational domains, and requires further reflection on the ethics of genetic testing in ART and PGD/PGS. Diagnostic laboratories should report their results according to internationally accepted accreditation standards (ISO 15189). Further studies are needed to address issues related to the impact of ART on the epigenetic reprogramming of the early embryo. The legal landscape regarding assisted reproduction is evolving but remains very heterogeneous and often contradictory. The lack of legal harmonisation and uneven access to infertility treatment and PGD/PGS foster considerable cross-border reproductive care in Europe and beyond. The aim of this paper is to complement previous publications and provide an update on selected topics that have evolved since 2005.

    Verifying Concurrent Data Structures Using Data-Expansion
